1.
Journal of Educational Evaluation for Health Professions; 2020: 39.
Article in English | WPRIM | ID: wpr-899264

ABSTRACT

Purpose: Consistent evaluation procedures based on objective and rational standards are essential for the sustainability of portfolio-based education, which has been widely introduced in medical education. We aimed to develop and implement a portfolio assessment system and to assess its validity and reliability.
Methods: We developed a portfolio assessment system from March 2019 to August 2019 and confirmed its content validity through review by an expert group comprising 2 medical education specialists, 2 professors involved in education at the medical school, and a professor of basic medical science. Six trained assessors conducted 2 rounds of evaluation of 7 randomly selected portfolios from the “Self-Development and Portfolio II” course from January 2020 to July 2020. These data were used to evaluate inter-rater reliability with intra-class correlation coefficients (ICCs) in September 2020.
Results: The portfolio assessment system is based on the following process: assessor selection, training, analytical/comprehensive evaluation, and consensus. Appropriately trained assessors evaluated portfolios against specific assessment criteria and a rubric for assigning points. In the analysis of inter-rater reliability for the first round of evaluation grades, all assessment areas except “goal-setting” showed a high ICC of 0.81 or higher. After the first round of assessment, we worked to standardize the assessment procedure. As a result, all components of the assessment showed close agreement, with ICCs of 0.81 or higher.
Conclusion: We confirmed that when appropriately trained assessors conduct portfolio assessment based on specified standards through a systematic procedure, the results are reliable.
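
The reliability analysis above rests on intra-class correlation coefficients. As a rough illustration of how such coefficients are obtained, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater) with NumPy; the abstract does not state which ICC form the authors used, and the portfolio scores shown are hypothetical.

import numpy as np

def icc_2_1(ratings):
    # ratings: n_targets x n_raters matrix, e.g., 7 portfolios x 6 assessors.
    # ICC(2,1) per Shrout & Fleiss: two-way random effects, absolute agreement.
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # between portfolios
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # between assessors
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical rubric scores for 7 portfolios rated by 6 assessors on one criterion.
scores = [[8, 7, 8, 9, 8, 7],
          [5, 6, 5, 5, 6, 5],
          [9, 9, 8, 9, 9, 8],
          [6, 7, 6, 6, 7, 6],
          [7, 7, 8, 7, 7, 8],
          [4, 5, 4, 4, 5, 4],
          [8, 8, 9, 8, 8, 9]]
print(round(icc_2_1(scores), 2))   # values of 0.81 or higher indicate high agreement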

2.
Journal of Educational Evaluation for Health Professions; 2019: 20.
Article in Korean | WPRIM | ID: wpr-937905

ABSTRACT

Purpose: This study was conducted to identify ways to improve the effectiveness and promote the success of the current problem-based learning (PBL) program at the Catholic University of Korea College of Medicine through a survey of professor and student perceptions.
Methods: A survey was carried out by sending mobile Naver Form survey pages via text message 3 times in December 2018 to 44 medical students and 74 professors. In addition, relevant official documents from the school administration were reviewed. The collected data were analyzed to assess the achievement of educational goals and the overall satisfaction with, and operational suitability of, the PBL program.
Results: The overall satisfaction scores for the PBL program were neutral (students, 3.27±0.95 vs. professors, 3.58±1.07; P=0.118). Regarding the achievement of educational goals, the integration of basic and clinical medicine and the encouragement of learning motivation were ranked lowest. Many respondents expressed negative opinions about the modules (students, 25.0%; professors, 39.2%) and tutors (students, 54.5%; professors, 24.3%). The students and professors agreed that the timing of the program within medical school and the length of each phase were suitable, although more detailed opinions pointed to issues such as classes being held too close to examinations and their alignment with the regular course units.
Conclusion: Issues with modules and tutors were the most pressing. Detailed and appropriate modules should be developed on the basis of advice from professors with experience in PBL tutoring, and inconsistencies in tutoring should be reduced through standardization and retraining.
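
The satisfaction comparison above (students 3.27±0.95 vs. professors 3.58±1.07; P=0.118) can be reproduced approximately from the summary statistics alone. The sketch below uses SciPy's ttest_ind_from_stats; the abstract does not report the exact test used or the number of respondents per group, so the group sizes (the 44 students and 74 professors who were surveyed) are assumptions.

from scipy.stats import ttest_ind_from_stats

# Independent-samples t-test from the reported means and standard deviations.
# Group sizes are assumed equal to the numbers surveyed, which may overstate
# the actual number of respondents.
t, p = ttest_ind_from_stats(mean1=3.27, std1=0.95, nobs1=44,   # students
                            mean2=3.58, std2=1.07, nobs2=74,   # professors
                            equal_var=True)
print(f"t = {t:.2f}, p = {p:.3f}")   # p > 0.05: no significant difference in satisfaction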

3.
Journal of Educational Evaluation for Health Professions; 2019: 38.
Article in Korean | WPRIM | ID: wpr-937888

ABSTRACT

Purpose: We developed and operated a portfolio-based course aimed at strengthening pre-medical students’ capabilities for self-management and self-improvement. To determine the effectiveness of the course and to establish future operational strategies, we evaluated the course and the students’ learning experience.
Methods: The subjects of this study were 97 students of the 2019 pre-medical course “Self-development and portfolio I.” Their learning experience was evaluated through the professor’s assessment of the portfolios they had submitted, and the program was evaluated based on the responses of the 68 students who completed a survey. The survey questionnaire included 32 items. Descriptive statistics, including the mean and standard deviation, were reported for quantitative data, and opinions collected from the open-ended question were grouped into categories.
Results: The evaluation of the portfolios showed that only 6.2% were well organized, with specific goals, strategies, processes, and self-reflections, while most lacked the basic components of a portfolio (46.4%) or contained insufficient content (47.4%). In the survey, students felt that regular portfolio personality assessments (72.1%) and team (64.7%) and individual (60.3%) activities were more appropriate educational methods for this course than lectures. Regarding the portfolio creation experience, the forms and components of the portfolios (68.2%) and the materials provided (62.2%) were considered appropriate. However, students felt that individual autonomy needed to be reflected more (66.7%) and that the course interfered with their other studies (42.5%).
Conclusion: The findings of this study suggest that standardized samples, guidelines, and sufficient time for autonomous portfolio creation should be provided. In addition, education on portfolio utilization should be conducted in small groups in the future.
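
The percentages above are straightforward descriptive statistics. As a minimal sketch of how such figures are tabulated, the snippet below uses pandas on hypothetical portfolio ratings; the category labels and counts are illustrative (chosen so that 97 portfolios reproduce the reported percentages), not data taken from the paper.

import pandas as pd

# Hypothetical professor ratings of 97 portfolios, one quality label per portfolio.
ratings = pd.Series(["missing basic components"] * 45 +
                    ["insufficient content"] * 46 +
                    ["well-organized"] * 6)
percentages = ratings.value_counts(normalize=True).mul(100).round(1)
print(percentages)   # share of portfolios in each quality category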

4.
Korean Medical Education Review; 2017(3): 129-133.
Article in Korean | WPRIM | ID: wpr-760412

ABSTRACT

Premedical education at the College of Medicine of the Catholic University of Korea aims to promote student creativity and excellence in accordance with the mission of the college: to foster a sense of calling, leadership, and competency. The Catholic Medical College premedical curriculum comprises 75 credits, composed of 65 credits of required courses and 10 credits of electives. It consists of courses in basic science, medical science, and the liberal arts and humanities (premedical OMNIBUS), and it also includes community programs in ‘Vision and Mission,’ ‘Leadership Training,’ and ‘Academic Conference.’ In addition, students are allowed self-directed choice of their courses and learning for one quarter.


Subject(s)
Humans , Creativity , Curriculum , Education , Education, Premedical , Humanities , Korea , Leadership , Learning
5.
Korean Journal of Medical Education; 2015: 255-265.
Article in Korean | WPRIM | ID: wpr-204389

ABSTRACT

PURPOSE: The purpose of this study was to develop criteria for evaluating a premedical curriculum, in order to ultimately improve the quality of premedical education. METHODS: The first draft of the evaluation criteria was developed through a literature review and expert consultation, and a Delphi survey was conducted to ensure the validity of the draft. RESULTS: The final premedical curriculum evaluation criteria consisted of three evaluation areas (curriculum development, curriculum implementation, and curriculum outcome), five evaluation items (educational objective, organization of the curriculum, instructional method, class management, and educational outcome), and 18 evaluation indicators. CONCLUSION: Further discussion is needed on the evaluation questionnaire and the content of each evaluation indicator with regard to practical application. A concrete evaluation system, including evaluation standards and rating scales, should also be developed.
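
The abstract does not describe how the Delphi panel's responses were quantified. One common way to summarize expert agreement on candidate indicators in this kind of validation is Lawshe's content validity ratio (CVR), sketched below; the panel size, indicator names, and vote counts are hypothetical, and this is not necessarily the analysis the authors performed.

def content_validity_ratio(n_essential, n_panelists):
    # Lawshe's CVR: +1 when every panelist rates an indicator "essential",
    # 0 when exactly half do, negative when fewer than half do.
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

# Hypothetical second-round Delphi tallies for three draft indicators.
panel_size = 15
for name, essential_votes in [("educational objective", 14),
                              ("class management", 11),
                              ("instructional method", 9)]:
    print(name, round(content_validity_ratio(essential_votes, panel_size), 2))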


Subject(s)
Humans , Curriculum/standards , Delphi Technique , Education, Premedical/standards , Program Evaluation/methods , Reproducibility of Results
6.
Korean Journal of Medical Education; 2015: 11-18.
Article in Korean | WPRIM | ID: wpr-69916

ABSTRACT

PURPOSE: The purpose of this study was to evaluate the effects of peer review of lectures in an integrated curriculum and to guide further improvement of the curriculum. METHODS: In 2012, Seoul National University College of Medicine implemented a peer review system for 11 courses in an integrated curriculum. For each lecture, two reviewers rated the lecture using a 10-item questionnaire on a 4-point scale. We analyzed the correlation between the total score and each item, and the inter-rater reliability between the two reviewers, using Pearson correlation. The relationship between peer review scores and student lecture evaluations was also analyzed. RESULTS: The mean total score on the checklist was 31.3 (out of 40.0), and the mean scores for the individual items ranged from 2.65 to 3.35 (out of 4.00). The correlation coefficients between the total score and the individual items were high, ranging from 0.656 to 0.849, except for three items. The mean difference in scores between reviewers was 5.03, and the inter-rater correlation coefficients were significantly high, ranging from 0.968 to 0.999. Peer review scores and student lecture evaluations generally corresponded, but there were outlying exceptions; the correlation coefficients were 0.105 and 0.093. CONCLUSION: Peer review is a useful method for improving the quality of lectures in an integrated curriculum by monitoring the objectives, contents, and methods of the lectures and providing feedback to the professors.
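
The analysis above relies on Pearson correlations, both between the two reviewers and between each checklist item and the total score. The sketch below shows the corresponding SciPy call on hypothetical reviewer totals; the numbers are illustrative, not the study's ratings.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical checklist totals (out of 40) assigned by two reviewers to the
# same eight lectures.
reviewer_a = np.array([31, 28, 35, 30, 33, 27, 36, 29])
reviewer_b = np.array([33, 30, 36, 31, 35, 29, 38, 30])

r, p = pearsonr(reviewer_a, reviewer_b)      # inter-rater agreement
print(f"inter-rater r = {r:.3f} (p = {p:.4f})")

# The same call can be applied to item scores versus total scores, or to peer
# review totals versus mean student lecture evaluation scores.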


Subject(s)
Humans , Curriculum/standards , Faculty , Feedback , Peer Review , Reproducibility of Results , Seoul , Surveys and Questionnaires , Video Recording
7.
Korean Journal of Medical Education; 2013: 131-137.
Article in Korean | WPRIM | ID: wpr-168939

ABSTRACT

PURPOSE: This study examined the use of the Tucker linear equating method to produce individual student scores across 3 groups, linked by bridging stations, over 3 consecutive days of the clinical performance examination (CPX), and compared the differences in scoring patterns by the number of bridging stations. METHODS: Data were drawn from 88 examinees in 3 CPX groups (DAY1, DAY2, and DAY3), each of which comprised 6 stations. Each group had 3 stations in common with the other groups and 2 or 3 stations that differed. DAY1 and DAY3 were equated to DAY2, and the equated mean scores and standard deviations were compared with the originals. DAY1 and DAY3 were then equated again, and the differences in scores (equated score minus raw score) were compared among the 3 sets of equated scores. RESULTS: After equating to DAY2, the DAY1 mean score decreased from 58.188 to 56.549 and its standard deviation changed from 4.991 to 5.046, while the DAY3 mean score fell from 58.351 to 58.057 and its standard deviation changed from 5.546 to 5.856, demonstrating that the scores of examinees in DAY1 and DAY3 were adjusted by the equating. The patterns of score differences among the sets equated to DAY1, DAY2, and DAY3 provided information on the soundness of the equating results from both individual and overall comparisons. CONCLUSION: To generate equated scores across the 3 groups on the 3 consecutive days of the CPX, we applied the Tucker linear equating method. We also present a method for equating the other days to the anchoring day using the bridging stations.
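
The Tucker method named above is a linear equating procedure for the common-item (here, bridging-station) nonequivalent-groups design. The sketch below follows the standard synthetic-population formulation from the psychometric literature (e.g., Kolen and Brennan); the function name, the default group weights, and the simulated scores are assumptions, not the authors' implementation.

import numpy as np

def tucker_equate(x_new, v_new, y_anchor, v_anchor, w_new=None):
    # Tucker linear equating for the common-item nonequivalent-groups design.
    # x_new / v_new: total and bridging-station scores of the group to be equated
    # (e.g., DAY1); y_anchor / v_anchor: the same for the anchoring group (e.g.,
    # DAY2). Returns x_new placed on the anchor-day scale.
    x, v1 = np.asarray(x_new, float), np.asarray(v_new, float)
    y, v2 = np.asarray(y_anchor, float), np.asarray(v_anchor, float)
    n1, n2 = len(x), len(y)
    w1 = n1 / (n1 + n2) if w_new is None else w_new   # synthetic-population weights
    w2 = 1.0 - w1

    # Regression slopes of total score on bridging score within each group.
    g1 = np.cov(x, v1, ddof=1)[0, 1] / np.var(v1, ddof=1)
    g2 = np.cov(y, v2, ddof=1)[0, 1] / np.var(v2, ddof=1)

    d_mu = v1.mean() - v2.mean()                       # bridging-score mean difference
    d_var = np.var(v1, ddof=1) - np.var(v2, ddof=1)    # bridging-score variance difference

    # Synthetic-population means and variances of the two total-score scales.
    mu_x = x.mean() - w2 * g1 * d_mu
    mu_y = y.mean() + w1 * g2 * d_mu
    var_x = np.var(x, ddof=1) - w2 * g1**2 * d_var + w1 * w2 * (g1 * d_mu) ** 2
    var_y = np.var(y, ddof=1) + w1 * g2**2 * d_var + w1 * w2 * (g2 * d_mu) ** 2

    # Linear equating: match mean and standard deviation on the synthetic
    # population, then transform the new-form scores onto the anchor scale.
    return mu_y + np.sqrt(var_y / var_x) * (x - mu_x)

# Illustration with simulated scores: place hypothetical DAY1 totals on the DAY2 scale.
rng = np.random.default_rng(0)
v_day1 = rng.normal(30, 3, 30); x_day1 = v_day1 + rng.normal(28, 3, 30)
v_day2 = rng.normal(31, 3, 30); y_day2 = v_day2 + rng.normal(27, 3, 30)
print(tucker_equate(x_day1, v_day1, y_day2, v_day2).round(2)[:5])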


Subject(s)
Clinical Competence , Educational Measurement